Exercise 4: Clustering and classification

This week's exercise is all about visually exploring statistical data. The exercise is based on the Boston data set provided with the MASS R package.

First, load the required libraries and read in the data:

library(tidyverse)
## -- Attaching packages --------------------------------------- tidyverse 1.3.1 --
## v ggplot2 3.3.5     v purrr   0.3.4
## v tibble  3.1.6     v dplyr   1.0.7
## v tidyr   1.1.4     v stringr 1.4.0
## v readr   2.1.0     v forcats 0.5.1
## -- Conflicts ------------------------------------------ tidyverse_conflicts() --
## x dplyr::filter() masks stats::filter()
## x dplyr::lag()    masks stats::lag()
library(MASS)
## 
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
## 
##     select
BSTN <- MASS::Boston
glimpse(BSTN)
## Rows: 506
## Columns: 14
## $ crim    <dbl> 0.00632, 0.02731, 0.02729, 0.03237, 0.06905, 0.02985, 0.08829,~
## $ zn      <dbl> 18.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.5, 12.5, 12.5, 12.5, 12.5, 1~
## $ indus   <dbl> 2.31, 7.07, 7.07, 2.18, 2.18, 2.18, 7.87, 7.87, 7.87, 7.87, 7.~
## $ chas    <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,~
## $ nox     <dbl> 0.538, 0.469, 0.469, 0.458, 0.458, 0.458, 0.524, 0.524, 0.524,~
## $ rm      <dbl> 6.575, 6.421, 7.185, 6.998, 7.147, 6.430, 6.012, 6.172, 5.631,~
## $ age     <dbl> 65.2, 78.9, 61.1, 45.8, 54.2, 58.7, 66.6, 96.1, 100.0, 85.9, 9~
## $ dis     <dbl> 4.0900, 4.9671, 4.9671, 6.0622, 6.0622, 6.0622, 5.5605, 5.9505~
## $ rad     <int> 1, 2, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4,~
## $ tax     <dbl> 296, 242, 242, 222, 222, 222, 311, 311, 311, 311, 311, 311, 31~
## $ ptratio <dbl> 15.3, 17.8, 17.8, 18.7, 18.7, 18.7, 15.2, 15.2, 15.2, 15.2, 15~
## $ black   <dbl> 396.90, 396.90, 392.83, 394.63, 396.90, 394.12, 395.60, 396.90~
## $ lstat   <dbl> 4.98, 9.14, 4.03, 2.94, 5.33, 5.21, 12.43, 19.15, 29.93, 17.10~
## $ medv    <dbl> 24.0, 21.6, 34.7, 33.4, 36.2, 28.7, 22.9, 27.1, 16.5, 18.9, 15~

The data consists of 506 entries for 14 variables describing housing values in the suburbs of Boston, Massachusetts, together with air quality and the residents' willingness to pay for clean air. A detailed description of each variable can be found in the R documentation (?MASS::Boston).

summary(BSTN)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

It seems like there is a lot of variation between the different towns/suburbs, with the widest ranges in per capita crime rate (min: 0.0063, max: 88.98), age of housing (min: 2.9, max: 100), and proportion of residential land zoned for lots over 25,000 sq.ft. (min: 0, max: 100).
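
These ranges can be confirmed programmatically; a minimal sketch using base R:

#' compute the min and max of every variable
sapply(BSTN, range)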

Now let's look at the correlations between these variables:

#' create correlation matrix
BSTN_cor <- cor(BSTN) %>%
  round(digits = 2)

#' visualize
corrplot::corrplot(BSTN_cor, 
                   type = "upper", 
                   tl.pos = "d")

As can be observed from the plot, many of the variables are strongly correlated, the most striking being the positive correlation between nox and age, the negative correlation between nox and dis, and the positive correlation between rad and tax.
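
To pick out the strongest pairwise relationships by value rather than by eye, one possible sketch (the cor_pairs name is my own):

#' flatten the correlation matrix into pairs and keep each pair once
cor_pairs <- as.data.frame(as.table(BSTN_cor)) %>%
  dplyr::filter(as.integer(Var1) < as.integer(Var2)) %>%
  dplyr::arrange(dplyr::desc(abs(Freq)))

#' top 5 strongest correlations (by absolute value)
head(cor_pairs, 5)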

To account for the massive ranges of certain variables and to keep the data comparable, let's standardize the variables with scale(), which centers each column at mean 0 and scales it to unit standard deviation.

BSTN_scale <- as.data.frame(scale(BSTN))
summary(BSTN_scale)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865
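
As a sanity check, scale() is equivalent to subtracting each column's mean and dividing by its standard deviation; a minimal sketch for a single column:

#' manual standardization of one column should match scale()
crim_manual <- (BSTN$crim - mean(BSTN$crim)) / sd(BSTN$crim)
all.equal(crim_manual, BSTN_scale$crim)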

Next, let's recode the crime rate variable into a categorical variable, using its quantiles as break points:

#' generate quantiles
crime_qtls <- quantile(BSTN_scale$crim)

#' create categorical variable
BSTN_scale_mod <- BSTN_scale %>%
  dplyr::mutate(crime = cut(crim, 
             breaks = crime_qtls, 
             include.lowest = TRUE, 
             labels = c("low",
                        "mid_low", 
                        "mid_high", 
                        "high"))) %>%
  dplyr::select(-crim)
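
Since the breaks are the quantiles of crim, the four classes should each contain roughly a quarter of the 506 observations; this can be verified with table():

#' check the class sizes
table(BSTN_scale_mod$crime)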

With crime as the variable of interest, we divide the data set into training (80%) and test (20%) sets:

#' randomly extract indexes for 80% of the data
sub <- sample(nrow(BSTN_scale_mod), 
              size = nrow(BSTN_scale_mod) * 0.8)

#' subset the training set (80%)
train <- BSTN_scale_mod[sub,]

#' subset the test set (20%)
test <- BSTN_scale_mod[-sub,]

#' save correct classes
crt <- test$crime

#' remove crime from test data set
test <- test %>%
  dplyr::select(-crime)
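
Note that sample() makes this split random, so the exact numbers in the results below will vary between runs; for a reproducible split one could fix the seed first, for example:

#' fix the random number generator before sampling (optional)
set.seed(211)
sub <- sample(nrow(BSTN_scale_mod), 
              size = nrow(BSTN_scale_mod) * 0.8)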

Next, we perform linear discriminant analysis (LDA) on the training set. LDA is a dimensionality reduction technique that finds the linear combinations of features that best separate the target classes.

#' LDA
BSTN_lda <- MASS::lda(crime ~ ., data = train)
BSTN_lda
## Call:
## lda(crime ~ ., data = train)
## 
## Prior probabilities of groups:
##       low   mid_low  mid_high      high 
## 0.2574257 0.2277228 0.2500000 0.2648515 
## 
## Group means:
##                   zn      indus        chas        nox            rm        age
## low       1.04870954 -0.9316141 -0.15875887 -0.8941281  0.4608049517 -0.9237939
## mid_low  -0.08573309 -0.2878126  0.07002747 -0.5965854 -0.1647932481 -0.3296878
## mid_high -0.38450487  0.2148020  0.15646403  0.3964405 -0.0001983126  0.4399647
## high     -0.48724019  1.0170108 -0.05155709  1.0801455 -0.4326398588  0.8102403
##                 dis        rad        tax     ptratio       black       lstat
## low       0.9056070 -0.6969630 -0.7220884 -0.49777187  0.37546036 -0.79353988
## mid_low   0.3771101 -0.5387127 -0.4678248 -0.05527346  0.31100124 -0.10778421
## mid_high -0.4108372 -0.4406135 -0.3189915 -0.26254928  0.06616281  0.09141195
## high     -0.8520009  1.6392096  1.5148289  0.78203563 -0.81349545  0.89028654
##                 medv
## low       0.54060510
## mid_low  -0.02389479
## mid_high  0.10742357
## high     -0.70716111
## 
## Coefficients of linear discriminants:
##                  LD1          LD2         LD3
## zn       0.122502971  0.631584646 -0.92750102
## indus   -0.017416125 -0.342309918  0.36887710
## chas    -0.008003716 -0.014801962  0.21213086
## nox      0.392206627 -0.641786514 -1.40648445
## rm      -0.013161742  0.002411445 -0.13545629
## age      0.235907752 -0.340708652 -0.06962402
## dis     -0.133714962 -0.195263137  0.31541576
## rad      3.330350256  0.801809307  0.12989557
## tax      0.044760302  0.243517399  0.41544565
## ptratio  0.139259628  0.008611557 -0.28186205
## black   -0.105451869  0.045652284  0.11128408
## lstat    0.201455660 -0.284124680  0.33785506
## medv     0.075769686 -0.425486430 -0.17100214
## 
## Proportion of trace:
##    LD1    LD2    LD3 
## 0.9515 0.0365 0.0120
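
The proportion of trace above indicates that LD1 alone captures about 95% of the between-group variance; it can be recomputed from the singular values stored in the fitted lda object:

#' proportion of trace = squared singular values, normalized
BSTN_lda$svd^2 / sum(BSTN_lda$svd^2)
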
#' define a helper function that draws the LDA coefficient arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1, 2)){
  #' extract the matrix of linear discriminant coefficients
  heads <- coef(x)
  #' draw an arrow from the origin for each variable
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[, choices[1]], 
         y1 = myscale * heads[, choices[2]], col = color, length = arrow_heads)
  #' label each arrow with its variable name
  text(myscale * heads[, choices], labels = row.names(heads), 
       cex = tex, col = color, pos = 3)
}

#' target classes as integers (used for plot colors and symbols)
tr_crt <- as.numeric(train$crime)

#' plot LDA
plot(BSTN_lda, 
     col = tr_crt, 
     dimen = 2, 
     pch = tr_crt)
lda.arrows(BSTN_lda, myscale = 2)

As we can see from the plot, the high crime class separates very well from the low and mid_low classes, while the mid_high category still overlaps somewhat with low/mid_low. Furthermore, the variables rad, nox, and zn seem to be crucial for the separation.
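
To back this visual impression up numerically, one might rank the variables by the magnitude of their LD1 coefficients (a quick sketch):

#' rank variables by their absolute contribution to LD1
sort(abs(BSTN_lda$scaling[, "LD1"]), decreasing = TRUE)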

Now let's predict the crime classes for the test set:

#' predict the classes
BSTN_pred <- predict(BSTN_lda, 
                     newdata = test)

#' compare against the correct classes
table(correct = crt, 
      predicted = BSTN_pred$class)
##           predicted
## correct    low mid_low mid_high high
##   low       12       9        2    0
##   mid_low    7      17       10    0
##   mid_high   0       5       18    2
##   high       0       0        0   20

There were some classification errors among the low, mid_low, and mid_high classes, while the high class was predicted perfectly.
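
From the cross-tabulation, 12 + 17 + 18 + 20 = 67 of the 102 test observations fall on the diagonal, i.e. roughly 66% overall accuracy (the exact figure will vary with the random split). Programmatically:

#' overall prediction accuracy on the test set
mean(crt == BSTN_pred$class)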

Next up, we compute the distances between the observations and then perform k-means clustering:

#' euclidean distances (the default method)
BSTN_dist <- stats::dist(BSTN_scale)

#' summary
summary(BSTN_dist)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
#' manhattan distance matrix
BSTN_dist_man <- dist(BSTN_scale, 
                      method = 'manhattan')

#' summary
summary(BSTN_dist_man)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.2662  8.4832 12.6090 13.5488 17.7568 48.8618
#' k-means clustering on the scaled data
BSTN_km <- stats::kmeans(BSTN_scale, 
                         centers = 3)

#' visualize
pairs(BSTN_scale, 
      col = BSTN_km$cluster)

Now let's determine the optimal number of clusters k for k-means clustering:

#' set seed for reproducibility
set.seed(211)

#' maximum number of clusters to test
k_max <- 10

#' calculate the total within-cluster sum of squares for each k
twcss <- sapply(1:k_max, function(k){kmeans(BSTN_scale, k)$tot.withinss})

#' visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')

#' k-means clustering with the chosen k
km <- kmeans(BSTN_scale, 
             centers = 2)

#' plot the scaled Boston data with the cluster assignments
pairs(BSTN_scale, 
      col = km$cluster)

From the plot we can see that the most drastic drop in the total within-cluster sum of squares (WCSS) occurs at k = 2, which is indicative of the optimal k value.
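
To make the "most drastic drop" concrete, one can inspect the successive differences of the WCSS values; the largest (most negative) difference marks the elbow:

#' successive drops in total WCSS between consecutive k values
diff(twcss)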

Super bonus

model_predictors <- dplyr::select(train, -crime)

#' check the dimensions
dim(model_predictors)
## [1] 404  13
dim(BSTN_lda$scaling)
## [1] 13  3
#' matrix multiplication: project the predictors onto the discriminants
matrix_product <- as.matrix(model_predictors) %*% BSTN_lda$scaling
matrix_product <- as.data.frame(matrix_product)


#' 3D scatter plot of the discriminant scores, colored by crime class
plotly::plot_ly(x = matrix_product$LD1, 
                y = matrix_product$LD2, 
                z = matrix_product$LD3, 
                type = 'scatter3d', 
                mode = 'markers', 
                color = train$crime)

The plots generally look similar, with the high crime class clearly separated from the other groupings.
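
As a cross-check (a sketch, not required by the exercise): the same scores, up to a constant column-wise shift, can be obtained from predict() on the training data, since predict() additionally centers the predictors before projecting:

#' discriminant scores from predict(); these should match matrix_product
#' up to a constant offset in each column
lda_scores <- predict(BSTN_lda)$x
head(lda_scores - as.matrix(matrix_product))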